https://w.atwiki.jp/imas/pages/717.html
てんじゃP

Many viewers may not know the name, but plenty owe him a debt: the unsung Hercules holding up the Weekly Idolmaster Ranking (週マス). He is so scrupulous about fairness that he asks for his own works to be excluded from the ranking. Lately, though, an unusual obsession with retro-game music has gradually begun to surface. His uploads are mostly "tried drawing it" pieces that make use of his drawing skill, and he also puts his data-gathering ability to work in the ongoing quiz series "Guess Which P This Is".

Latest work: Cameltry!? This! It reeks of a true arcade gamer!

"Guess Which P This Is" series: a very simple quiz in which you identify producers from the tags attached to their videos. For retro-game fans, the series also works as background-music videos.
- Part 3: difficulty up a notch; type your answers carefully!
- Part 2: have you ever cried over a quiz?
- Part 1: how many can you answer?

Notable work: his signature piece, though please rate it gently. K-Kotori-san!

Other videos:
- The idols recreate the opening of the nostalgic F1GP.
- A tribute to jajaP's "THE iDOLM@STER Disc Writer". Watching the original first is recommended. It really is too cramped in there...
- A 週マスSP support video. Everyone who read the wiki, keep it secret from 桃月P!
- A 薄幸P tribute video. The ingredients used were afterwards enjoyed by てんじゃP himself?
- A top-secret video that caught the behind-the-scenes making of the ranking on camera.

Links: Niconico video list (tag: てんじゃP), Nico Nico Pedia entry for てんじゃP, upload list (96P), author blog "ぷろじぇくと@あいます", NICOM@S STYLE.

Production environment: Ulead VideoStudio 12; Adobe Photoshop CS3; Adobe Illustrator CS3; Adobe Flash CS3 Professional; Adobe After Effects CS3; e-frontier Shade 6 Spirit; CELSYS ComicStudio 3.0 Debut; SoundEngine Free; Audacity; GarageBand; flvenc (encode settings: Adjust Quantizer min set to a single digit, max to 15-30; videos built from still images eat little bitrate and stay small, so the savings go toward reducing block noise); x264; ffmpeg; Microsoft Excel 2007; 秀丸 (Hidemaru); WinSCP; TeraTerm (+ttssh2); R; a self-made Perl module for retrieving Nico Nico Douga video information; self-made Plagger plugins: Niconico ranking fetcher, tag-search result fetcher, uploader-comment/tag/view-count/comment-count fetcher, list-to-HTML converter, list-to-Excel writer, mail-with-Excel-attachment sender, DB registration, top-N new-video extractor, blog poster; MySQL.

Tags: P名 P名_て サイト持ちP デビュー2007.9下旬 制作環境公開P 大百科収録P 投稿数10作品以上
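The self-made toolchain listed above describes a ranking pipeline: fetch the ranking, extract per-video information, pick the top N, and render a list. The sketch below is purely hypothetical; the real tools are self-made Perl/Plagger plugins, and every function name and record field here is invented to illustrate the plugin chain, not their actual code.

```python
# Hypothetical sketch of the ranking pipeline described above.
# Nothing here is the actual Perl/Plagger API; stage names and
# record fields are invented stand-ins.

def fetch_ranking(entries):
    """Stand-in for the ranking-retrieval plugin (would scrape the site)."""
    return list(entries)

def extract_info(entry):
    """Stand-in for the tag/view-count/comment-count extraction plugins."""
    return {"id": entry["id"], "title": entry["title"],
            "views": entry.get("views", 0)}

def top_n(records, n):
    """Stand-in for the 'top N new videos' plugin."""
    return sorted(records, key=lambda r: r["views"], reverse=True)[:n]

def to_html(records):
    """Stand-in for the list-to-HTML plugin."""
    rows = "".join(f"<li>{r['title']} ({r['views']})</li>" for r in records)
    return f"<ol>{rows}</ol>"

# Fake input standing in for scraped ranking data.
raw = [{"id": "sm1", "title": "A", "views": 300},
       {"id": "sm2", "title": "B", "views": 900},
       {"id": "sm3", "title": "C", "views": 500}]

html = to_html(top_n([extract_info(e) for e in fetch_ranking(raw)], 2))
```

The later stages (Excel export, mail, DB registration, blog posting) would simply be further functions appended to the same chain.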
https://w.atwiki.jp/how_to_use_ffmpegx/pages/13.html
6. Options tab

What you do here: set the video codec options. The contents of this tab change drastically depending on the selected engine/codec, and even identically named settings can behave differently in places. Only Xvid (MEncoder) and H.264 (MEncoder x264) are covered here.

Contents: 6.1. Xvid MEncoder / What "Quantizer" means; 6.2. MP4 H.264 / x264's Quantizer / Notes

6.1. Xvid MEncoder

Left column:
- High Quality: precision of motion estimation. Speed changes by roughly a factor of three, but quality improves and file size (i.e. required bitrate) drops.
- Use B-frames: enables B-frames. Too many B-frames generally hurts quality, but ffmpegX uses only the minimum, so the impact is small. Also reduces file size (i.e. required bitrate).
- Cartoon content: quality is better without it. My impression is that it tries to flood neighboring pixels with similar colors; it may suit Bugs Bunny-style "cartoons", but not anime or animation in general.
- Interlaced content: keeps the source's interlacing as-is, or applies interlacing to progressive source.
- Two pass encoding: two-pass encode. The 1st pass runs at roughly twice the speed of the 2nd, so total encoding time is less than double.
- Trellis quantization: slightly reduces file size (i.e. required bitrate). The exact mechanism is unclear to me.
- Quarter pixel ME (qpel): quarter-pixel motion estimation. Its effect depends on the source, so check before using it. Some older playback chips do not support it, so verify first if the file will be played on hardware players.

Right column:
- Scaling method: the scaling algorithm; fastest at the top, slowest at the bottom. For 720x480 to 640x480 any of them will do. bilinear: good for downscaling. bicubic: good for upscaling; the default when using MEncoder from the Terminal. lanczos and below: sharpen the image; good when upscaling.
- Keyframe interval: I-frame (keyframe) interval. I-frames are large, and seeking is only possible between keyframes. For XviD use at most 300 at 30 fps or 240 at 24 fps; beyond that, quality drops.
- QMin: minimum Quantizer*. The range is 1-31, but 2-31 is appropriate in practice; integers only. See below.
- QMax: maximum Quantizer*. The range is 1-31, but 2-31 is appropriate in practice; integers only. See below.
- 5 sec test clip: encodes only 5 seconds, for checking settings.
- Print PSNR: writes a log file for the encode. PSNR (peak signal-to-noise ratio, in decibels) is better when higher. Use it together with the 5-second test clip to verify settings.

What "Quantizer" means:
Think of it as a degree of quality loss: the higher the value, the higher the compression and the lower the quality. Xvid tries to use the lowest quantizer it can on each frame while staying within the specified bitrate. A 1-pass encode keeps every individual frame within the specified bitrate, so quality is poor. A 2-pass encode first analyzes the whole clip in the 1st pass, then distributes the bitrate across the clip as a whole: generous for high-motion or finely detailed sections, stingy elsewhere. The allocation policy also depends on the codec and other factors. The default of 2-9 should normally be fine.
A 1-pass encode with QMin=QMax gives constant-quality encoding; 4-4 corresponds to what the DivX component and 3ivx call "90% of the original quality". If the resulting file comes out at, say, 1500 kbps, a 2-pass encode at 1500 kbps will produce a better-looking file of the same size.

6.2. MP4 H.264

The H.264 codec built into ffmpegX is x264, an open-source, free codec conforming to the ISO MPEG-4 AVC standard (a.k.a. H.264). Being relatively new, it has little accumulated know-how and its settings work differently. In 0.0.9t there are known problems: audio desync, and files encoded with B-frames play in QuickTime Player 7 Pro but cannot be edited there.

Left column:
- Use CABAC: slows both encoding and decoding (higher CPU load); files shrink.
- Use B-frames: enables B-frames. Too many B-frames generally hurts quality, but ffmpegX uses only the minimum, so the impact is small. Also reduces file size (i.e. required bitrate). With this on, files play in QuickTime Player 7 Pro but cannot be edited there.
- Constant bitrate: on (the default) gives constant-bitrate encoding, varying the quantizer between QMin and QMax to hold the specified bitrate. Off gives constant-quality encoding; see below.
- Two pass encoding: two-pass encode. In 0.0.9t the 1st and 2nd passes run at the same speed. MEncoder already has a turbo mode that makes the 1st pass 2-4x faster than the 2nd, so it will presumably be adopted eventually.

Right column:
- Scaling method: see the Xvid section.
- ME function: motion-estimation search range. Roughly equivalent in effect to Xvid's High Quality; fastest at the top. Hexagon is about as fast as Xvid. Exhaustive takes nearly 2 seconds per frame on a dual 2 GHz G5... fun.
- I-frames interval: I-frame interval. In AVC/H.264 an I-frame is no longer necessarily a keyframe (there are I-frames that cannot be used for fast-forward or rewind). Whether ffmpegX's 120 is appropriate is hard to say; the default when using MEncoder from the Terminal is 250, which looks like a number chosen for EU DVDs.
- QMin: minimum Quantizer*. Range 2-51; integers only. See below.
- QMax: maximum Quantizer*. Range 2-51; integers only. See below.
- Print PSNR: see the Xvid section.

x264's Quantizer:
The scale is logarithmic, not that I claim to really understand it. Since q=20 and q=40 are said to differ in bitrate by only a factor of ten, the variation per step must be small. Coming from Xvid's quantizer, think of it as if fine fractional values such as 2.3 could be specified.

Notes:
- 1-pass: Constant bitrate on; specify the bitrate.
- 2-pass: Constant bitrate on; specify the bitrate.
- Constant-quality 1-pass: Constant bitrate off; the bitrate field is ignored. Use qmax=qmin=22; 21 or 20 give slightly higher quality (going much lower gains little).
- MEncoder's x264 also has a lossless setting, so support may be added in the future.
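The "constant-quality probe, then 2-pass at the measured bitrate" workflow from the Xvid section above can be scripted. This is a minimal sketch that only builds MEncoder command lines; the option spellings (`-ovc xvid`, `-xvidencopts fixed_quant=...`, `pass=1:bitrate=...`) match the MEncoder documentation as far as I know, but treat them as assumptions and check `man mencoder` for your build.

```python
# Sketch of the workflow described above: encode once with a fixed
# quantizer (QMin=QMax=4), read off the resulting bitrate, then run a
# 2-pass encode at that bitrate for better quality at the same size.
# Flag spellings are assumptions -- verify against your MEncoder build.

def probe_cmd(src, out, quant=4):
    # 1-pass with a fixed quantizer = constant-quality encode
    return ["mencoder", src, "-ovc", "xvid",
            "-xvidencopts", f"fixed_quant={quant}", "-o", out]

def two_pass_cmds(src, out, kbps):
    # 2-pass at the bitrate the probe produced
    return [
        ["mencoder", src, "-ovc", "xvid",
         "-xvidencopts", f"pass=1:bitrate={kbps}", "-o", "/dev/null"],
        ["mencoder", src, "-ovc", "xvid",
         "-xvidencopts", f"pass=2:bitrate={kbps}", "-o", out],
    ]
```

In practice you would run `probe_cmd` with `subprocess.run`, measure the output file's bitrate, and feed that number to `two_pass_cmds`.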
https://w.atwiki.jp/nicoratch/pages/822.html
Overview: The first DJ controller with a 4-channel layout. Includes real mixer functionality and ships with a special Quattro edition of Virtual DJ 7.

Specifications

Mixer: 4-channel mixer panel; 45 mm crossfader; line faders; 3-band EQ; channel gain control; LPF-HPF filtering; crossfader side assignment.

Player section: big transport buttons for PLAY, CUE and SYNC; large wheels with adjustable "Touch Sensitive" system; three jog-wheel modes (Vinyl, CDJ and Search); Loop Bank with 4 buttons to adjust the loop; high-resolution 60 mm 14-bit pitch faders; Hot Cue Bank with 4 buttons; Samples/Effects Bank with 4 buttons and 4 knobs; lit SP and FX buttons for easy mode changes.

Master section: master level knob; headphones level knob; headphones CUE/MASTER mix knob; microphone level knob; crossfader curve adjust switch; input line select switch (x2); push-encoder "Browser" for easy search of library and tracks; 4 Load Track buttons; Input 1/Input 2 gain knobs; power switch.

Inputs/outputs: 2 stereo line (RCA) with phono preamp; mic (1/4"); headphones (stereo 1/4"); master (stereo XLR); master (stereo RCA); booth (stereo RCA); USB type B (slave); 7.5 V DC.

Virtual DJ 7 LE minimum PC system requirements: Intel Core 2 or AMD Athlon X2; 1280x1024 screen resolution; multi-channel DirectX-compatible sound card; 1024 MB RAM; 200 MB free on the hard drive. OS: a) minimum: Microsoft Windows XP SP3; b) recommended: Microsoft Windows 7 Professional 32-bit.

Minimum Mac system requirements for Virtual DJ 7 LE: Intel processor; Mac OS X v10.6.x; 1024x768 (minimum) / 1440x900 (recommended) screen resolution; CoreAudio-compatible sound card; 1024 MB RAM (minimum) / 2048 MB (2 GB) recommended; 50 MB (minimum) / 200 MB (recommended) free on the hard drive. Supported OS and processor platforms: a) minimum: Mac OS X v10.5 Leopard on Intel; b) recommended: Mac OS X v10.6.x Snow Leopard on Intel.

Audio specs: Output level: balanced (XLR) 1.2 V +/-0.2 V; unbalanced (master, RCA) 1.2 V +/-0.2 V; unbalanced (booth, RCA) 1.2 V +/-0.2 V; headphones (1/4") 600 mV (with a 32-ohm load). SNR: 80 dB (A-weighted) on balanced (XLR), unbalanced master (RCA), unbalanced booth (RCA) and headphones. THD+N: 0.03% on all of the above. Crosstalk: 80 dB. Frequency response: 20 Hz - 20 kHz (+/-1.5 dB). Sample rate: 44.1 kHz. Data conversion: 16-bit.

General: power 7.5 V DC, 2 A; dimensions (LxWxH) 474 mm x 334 mm x 68 mm; weight 2.68 kg.

Included: Akiyama QUATTRO MIDI controller; Akiyama CD (Virtual DJ LE 7 included); AC/DC power supply; USB cable; Virtual DJ 7; Traktor PRO 2.5 Quick Start Guide; Traktor PRO 2 overlay.

Price: EUR 199.00 (new)

Quattro: http://www.akiyamadj.com/product_info1.php?products_model=QUATTRO
https://w.atwiki.jp/nicoratch/pages/1028.html
Overview: A turntable with +/-20% pitch control, 33/45/78 rpm speeds, and a USB output. Bundled with recording software (Audacity) and an Audio-Technica cartridge. Probably a Hanpin OEM.

Specifications

- High-torque direct-drive motor; fast start/stop
- 3 speeds: 33, 45, 78 rpm with quartz lock
- 2 pitch adjustment ranges: +/-10%, +/-20%
- Forward/reverse play
- USB output for direct recording to any PC
- Free, multilingual Audacity software available via download; endless possibilities with free downloadable effect plug-ins: import, export and edit Ogg Vorbis, MP3, AIFF and WAV sound files; cut, copy, splice and mix sounds together; change the speed or pitch of a recording; remove static, hiss, hum or other constant background noises; alter frequencies with Equalization, FFT Filter and Bass Boost effects; adjust volumes with Compressor, Amplify and Normalize effects; many more...
- Selectable phono/line output
- All functions equipped with stylish blue LED lights
- Plastic dust cover
- Retractable target light
- Audio-Technica cartridge included
- All-metal S-shaped tone arm assembly with counterweight, anti-skating adjustment, and lever lift with height adjustment
- Adjustable large feet for perfect leveling

General: application: DJ, club, stage, rental; dimensions: 45 x 35.2 x 15.7 cm; weight: 10.5 kg; power input: AC 230 V, 50 Hz; energy label: no; power consumption: 11 W; color: black; display: none.
Connections: input: -; output: -; RCA: yes; USB: yes.
Functions: pitch control: yes.
Brand: JB SYSTEMS. EAN code: 5420025603539. Discontinued: no.

Q3usb: https://jb-systems.eu/q3usb
https://w.atwiki.jp/submarine/pages/176.html
Software review: NeuQuant

The NeuQuant algorithm is a color-quantization algorithm I had bookmarked some time ago out of curiosity. I tried it for comparison against median-cut quantization.

NeuQuant: Fast High-Quality Image Quantization. Implementations have been released for various OSes; I used the Windows build below:
http://www.libpng.org/pub/png/apps/pngquant.html

Starting from a PNG file, running the following on the command line produces a 256-color, non-dithered conversion named aaa-or8.png:

pngquant -nofs 256 aaa.png

The algorithm is said to be limited to 64-256 colors, but comparing side by side this way, the result was not as clean as I had expected. On the billiard-ball 3D CG on the site above it was clearly better than my own algorithm, so perhaps it has strengths and weaknesses depending on the material? Verification with 3D CG may also be needed.

Related link: Reducing full color to 256 colors
https://w.atwiki.jp/selflearn/pages/54.html
Quantity Always Trumps Quality - "Quantity" always beats "quality"

Started: 2008-08-09. Translation finished: 2008-08-09. Last updated (minor fixes here and there): 2008-08-09.

Introduction

The well-known blog Radium Software had a post titled "Learn from quantity, not quality". As you'll see if you read the link, it really struck a chord with me. It linked to the source of that post:

Coding Horror - Quantity Always Trumps Quality

Partly as a reminder to myself, I decided to translate the article.

Original: "Quantity Always Trumps Quality"
http://www.codinghorror.com/blog/archives/001160.html

Note: I originally did this translation for personal use, so parts of it are fairly loose. If anything seems wrong or unclear, I'd appreciate a comment pointing it out (I'll do my best to look into it).

Update history: 2008/08/09 started and finished. Having thought about it, "experience" captures the point better than "quantity".

Text

August 02, 2008 - Quantity Always Trumps Quality

Nathan Bowers pointed me to this five-year-old Cool Tools entry on the book Art & Fear.

Although I am not at all ready to call software development "art" -- perhaps "craft" would be more appropriate, or "engineering" if you're feeling generous -- the parallels between some of the advice offered here and my experience writing software are profound.

The ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced, all those on the right solely on its quality. His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the "quantity" group: fifty pounds of pots rated an "A", forty pounds a "B", and so on. Those being graded on "quality", however, needed to produce only one pot - albeit a perfect one - to get an "A". Well, came grading time and a curious fact emerged: the works of highest quality were all produced by the group being graded for quantity. It seems that while the "quantity" group was busily churning out piles of work - and learning from their mistakes - the "quality" group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay.

Where have I heard this before?

Stop theorizing. Write lots of software. Learn from your mistakes.

Quantity always trumps quality. That's why the one bit of advice I always give aspiring bloggers is to pick a schedule and stick with it. It's the only advice that matters, because until you've mentally committed to doing it over and over, you will not improve. You can't.

When it comes to software, the same rule applies. If you aren't building, you aren't learning. Rather than agonizing over whether you're building the right thing, just build it. And if that one doesn't work, keep building until you get one that does.

Posted by Jeff Atwood
https://w.atwiki.jp/matchmove/pages/78.html
Stabilization

In this section, we'll go into SynthEyes' stabilization system in depth, and describe some of the nifty things that can be done with it.

If we wanted, we could have a single button "Stabilize this!" that would quickly and reliably do a bad job almost all the time. If that's what you're looking for, there are some other software packages that will be happy to oblige. In SynthEyes, we have provided a rich toolset to get outstanding results in a wide variety of situations.

You might wonder why we've buried such a wonderful and significant capability quite so far into the manual. The answer is simple: in the hopes that you've actually read some of the manual, because effectively using the stabilizer will require that you know a number of SynthEyes concepts, and how to use the SynthEyes tracking capabilities. If this is the first section of the manual that you're reading, great, thanks for reading this, but you'll probably need to check out some of the other sections too. At the least, you have to read the Stabilization quick-start. Also, be sure to check the web site for the latest tutorials on stabilization.

We apologize in advance for some of the rant content of the following sections, but it's really in your best interest!

Why SynthEyes Has a Stabilizer

The simple and ordinary need for stabilization arises when you are presented with a shot that is bouncing all over the place, and you need to clean it up into a solid professional-looking shot. That may be all that is needed, or you might need to track it and add 3-D effects also. Moving-camera shots can be challenging to shoot, so having software stabilization can make life easier. Or, you may have some film scans which are to be converted to HD or SD TV resolution, and effects added. People of all skill levels have been using a variety of ad-hoc approaches to address these tasks, sometimes using software designed for this, and sometimes using or abusing compositing software.
Sometimes, presumably, this all goes well. But many times it does not: a variety of problem shots have been sent to SynthEyes tech support which are just plain bad. You can look at them and see they have been stabilized, and not in a good way. We have developed the SynthEyes stabilizer not only to stabilize shots, but to try to ensure that it is done the right way.

How NOT to Stabilize

Though it is relatively easy to rig up a node-based compositor to shift footage back and forth to cancel out a tracked motion, this creates a fundamental problem: most imaging software, including you, expects the optic center of an image to fall at the center of that image. Otherwise, it looks weird—the fundamental camera geometry is broken. The optic center might also be called the vanishing point, center of perspective, back focal point, or center of lens distortion.

For example, think of shooting some footage out of the front of your car as you drive down a highway. Now cut off the right quarter of all the images and look at the sequence. It will be 4:3 footage, but it's going to look strange—the optic center is going to be off to the side. If you combine off-center footage with additional rendered elements, they will have the optic axis at their center, and combined with the different center of the original footage, they will look even worse.

So when you stabilize by translating an image in 2-D (and usually zooming a little), you've now got an optic center moving all over the place. Right at the point you've stabilized, the image looks fine, but the corners will be flying all over the place. It's a very strange effect, it looks funny, and you can't track it right. If you don't know what it is, you'll look at it, and think it looks funny but not know what has hit you.

Recommendation: if you are going to be adding effects to a shot, you should ask to be the one to stabilize or pan/scan it also. We've given you the tool to do it well, and avoid mishap.
That's always better than having someone else mangle it, and having to explain later why the shot has problems, or why you really need the original un-stabilized source by yesterday.

In-Camera Stabilization

Many cameras now feature built-in stabilization, using a variety of operating principles. These stabilizers, while fine for shooting baby's first steps, may not be fine at all for visual effects work.

Electronic stabilization uses additional rows and columns of pixels, then shifts the image in 2-D, just like the simple but flawed 2-D compositing approach. These are clearly problematic.

One type of optical stabilizer apparently works by putting the camera imaging CCD chip on a little platform with motors, zipping the camera chip around rapidly so it catches the right photons. As amazing as this is, it is clearly just the 2-D compositing approach. Another optical stabilizer type adds a small moving lens in the middle of the collection of simple lenses comprising the overall zoom lens. Most likely, the result is equivalent to a 2-D shift in the image plane. A third type uses prismatic elements at the front of the lens. This is more likely to be equivalent to re-aiming the camera, and thus less hazardous to the image geometry. Doubtless additional types are in use and will appear, and it is difficult to know their exact properties. Some stabilizers seem to have a tendency to intermittently jump when confronted with smooth motions.

One mitigating factor for in-camera stabilizers, especially electronic, is that the total amount of offset they can accommodate is small—the less they can correct, the less they can mess up.

Recommendation: it is probably safest to keep camera stabilization off when possible, and keep the shutter time (angle) short to avoid blur, except when the amount of light is limited. Electronic stabilizers have trouble with limited light, so that type might have to be off anyway.
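The geometric complaint against pure 2-D shifting can be shown with a few lines of arithmetic: after translating a frame to pin a tracked feature, the optic center (assumed here to start at the frame center) ends up displaced by exactly the applied shift. All coordinates below are invented for illustration.

```python
# If stabilization just translates each frame so a tracked feature stays
# put, the optic center (vanishing point) is dragged along by the same
# shift -- it wanders away from the frame center, which a 3-D (keystone)
# correction would avoid. Coordinates are pixels; the values are invented.

WIDTH, HEIGHT = 720, 480
center = (WIDTH / 2, HEIGHT / 2)

# Tracked feature position per frame (jittering around x=400, y=300).
feature = [(400, 300), (412, 296), (391, 307), (405, 289)]
target = feature[0]  # pin the feature where it sat on frame 0

optic_centers = []
for fx, fy in feature:
    shift = (target[0] - fx, target[1] - fy)     # 2-D stabilizing shift
    optic_centers.append((center[0] + shift[0],  # optic center after shift
                          center[1] + shift[1]))

# Frame 0 needs no shift; every later frame has a displaced optic center.
drift = [(x - center[0], y - center[1]) for x, y in optic_centers]
```

The `drift` values are exactly the per-frame shifts, so the harder the footage jitters, the farther the optic center strays from where downstream software expects it.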
3-D Stabilization

To stabilize correctly, you need 3-D stabilization that performs "keystone correction" (like a projector does), re-imaging the source at an angle. In effect, your source image is projected onto a screen, then re-shot by a new camera looking in a somewhat different direction with a smaller field of view. Using a new camera keeps the optic center at the center of the image. In order to do this correctly, you always have to know the field of view of the original camera. Fortunately, SynthEyes can tell us that.

Stabilization Concepts

Point of Interest (POI). The point of interest is the fixed point that is being stabilized. If you are pegging a shot, the point of interest is the one point on the image that never moves.

POI Deltas (Adjust tab). These values allow you to intentionally move the POI around, either to help reduce the amount of zoom required, or to achieve a particular framing effect. If you create a rotation, the image rotates around the POI.

Stabilization Track. This is roughly the path the POI took—it is a direction in 3-D space, described by pan/tilt/roll angles—basically where the camera (POI) was looking (except that the POI isn't necessarily at the center of the image).

Reference Track. This is the path in 3-D we want the POI to take. If the shot is pegged, then this track is just a single set of values, repeated for the duration of the shot.

Separate Field of View Track. The image preparation system has its own field of view track. The image prep's FOV will be larger than the main FOV, because the image prep system sees the entire input image, while the main tracking and solving works only on the smaller stabilized sub-window output by image prep. Note that an image prep FOV is needed only for stabilization, not for pixel-level adjustments, downsampling, etc. The Get Solver FOV button transfers the main FOV track to the stabilizer.

Separate Distortion Track. Similarly, there is a separate lens distortion track.
The image prep's distortion can be animated, while the main distortion can not. Either the image prep distortion or the main distortion should always be zero; they should never both be nonzero simultaneously. The Get Solver Distort button transfers the main distortion value (from solving or the Lens-panel alignment lines) to the stabilizer, and begs you to let it clear the main distortion value afterwards.

Stabilization Zoom. The output window can only be a portion of the size of the input image. The more jiggle, the smaller the output portion must be, to be sure that it does not run off the edge of the input (see the Padded mode of the image prep window to see this in action). The zoom factor reflects the ratio of the input and output sizes, and also what is happening to the size of a pixel. At a zoom ratio of 1, the input and output windows and pixels are the same size. At a zoom ratio of 2, the output is half the size of the input, and each incoming pixel has to be stretched to become two pixels in the output, which will look fairly blurry. Accordingly, you want to keep the zoom value down in the 1.1-1.3 region. After an Auto-scale, you can see the required zoom on the Adjust panel.

Re-sampling. There's nothing that says we have to produce the same size image going out as coming in. The Output tab lets you create a different output format, though you will have to consider what effect it has on image quality. Re-sampling 3K down to HD sounds good; but re-sampling DV up to HD will come out blurry because the original picture detail is not there.

Interpolation Filter. SynthEyes has to create new pixels "in-between" the existing ones. It can do so with different kinds of filtering to prevent aliasing, ranging from the default Bi-Linear to the most complex 3-Lanczos. The bi-linear filter is fastest but produces the softest image. The Lanczos filters take longer, but are sharper—although this can be a drawback if the image is noisy.

Tracker Paths.
One or more trackers are combined to form the stabilization track. The trackers' 2-D paths follow the original footage. After stabilization, they will not match the new stabilized footage. There is a button, Apply to Trkers, that adjusts the tracker paths to match the new footage, but again, they then match that particular footage and they must be restored to match the original footage (with Remove f/Trkers) before making any later changes to the stabilization. If you mess up, you either have to return to an earlier saved file, or re-track.

Overall Process

We're ready to walk through the stabilization process. You may want to refer to the Image Preprocessor Reference.

· Track the features required for stabilization: either a full auto-track, supervised tracking of particular features to be stabilized, or a combination.
· If possible, solve the shot, either for full 3-D or as a tripod shot, even if it is not truly nodal. The resulting 3-D point locations will make the stabilization more accurate, and it is the best way to get an accurate field of view.
· If you have not solved the shot, manually set the Lens FOV on the Image Preprocessor's Lens tab (not the main Lens panel) to the best available value. If you did set up the main lens FOV, you can import it to the Lens tab.
· On the Stabilization tab, select a stabilization mode for translation and/or rotation. This will build the stabilization track automatically if there isn't one already (as if the Get Tracks button was hit), and import the lens FOV if the shot is solved.
· Adjust the frequency spinner as desired.
· Hit the Auto-Scale button to find the required stabilization zoom.
· Check the zoom on the Adjust tab; using the Padded view, make any additional adjustments to the stabilization activity to minimize the required zoom, or achieve the desired shot framing.
· Output the shot. If only stabilized footage is required, you are done.
· Update the scene to use the new imagery, and either re-track or update the trackers to account for the stabilization.
· Get a final 3-D or tripod solve and export to your animation or compositing package for further effects work.

There are two main kinds of shots, and stabilization for them: shots focusing on a subject, which is to remain in the frame, and traveling shots, where the content of the image changes as new features are revealed.

Stabilizing on a Subject

Often a shot focuses on a single subject, which we want to stabilize in the frame, despite the shaky motion of the camera. Example shots of this type include:

· The camera person walking towards a mark on the ground, to be turned into a cliff edge for a reveal.
· A job site to receive a new building, shot from a helicopter orbiting overhead.
· A camera car driving by a house, focusing on the house.

To stabilize these shots, you will identify or create several trackers in the vicinity of the subject, and with them selected, select the Peg mode on the Translation list on the Stabilize tab. This will cause the point of interest to remain stationary in the image for the duration of the shot.

You may also stabilize and peg the image rotation. Almost always, you will want to stabilize rotation. It may or may not be pegged.

You may find it helpful to animate the stabilized position of the point of interest, in order to minimize the zoom required (see below), and also to enliven a shot somewhat. Some car commercials are shot from a rig that shows both the car and the surrounding countryside as the car drives; they look a bit surreal because the car is completely stationary—having been pegged exactly in place. No real camera rig is that perfect!

Stabilizing a Traveling Shot

Other shots do not have a single subject, but continue to show new imagery.
For example:

· A camera car, with the camera facing straight ahead.
· A forward-facing camera in a helicopter flying over terrain.
· A camera moving around the corner of a house to reveal the backyard behind it.

In such shots, there is no single feature to stabilize. Select the Filter mode for the stabilization of translation and maybe rotation. The result is similar to the stabilization done in-camera, though in SynthEyes you can control it and have keystone correction.

When the stabilizer is filtering, the Cut Frequency spinner is active. Any vibratory motion below that frequency (in cycles per second) is preserved, and vibratory motion above that frequency is greatly reduced or eliminated. You should adjust the spinner based on the type of motion present, and the degree of stabilization required. A camera mounted on a car with a rigid mount, such as a StickyPod, will have only higher-frequency residual vibration, and a larger value can be used. A hand-held shot will often need a frequency around 0.5 Hz to be smooth.

Note: when using filter-mode stabilization, the length of the shot matters. If the shot is too short, it is not possible to accurately control the frequency and distinguish between vibration and the desired motion, especially at the beginning and end of the shot. Using a longer version of the take will allow more control, even if much of the stabilized shot is cut after stabilization.

Minimizing Zoom

The more zoom required to stabilize a shot, the less image quality will result, which is clearly bad. Can we minimize the zoom, and maximize image quality? Of course, and SynthEyes provides the controllability to do so. Stabilizing a shot has considerable flexibility: the shot can be stable in lots of different ways, with different amounts of zoom required. We want a shot that everyone agrees is stable, but minimizes the effect on quality.
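The filter-mode idea of keeping motion below the cut frequency while suppressing vibration above it can be illustrated with a toy one-pole low-pass filter on a 1-D camera path. This is only a sketch of the concept using the standard RC one-pole form; SynthEyes' actual filter is certainly more sophisticated.

```python
import math

# One-pole low-pass: smooths a jittery 1-D camera path, keeping slow
# drift (below the cut frequency) and suppressing fast vibration above it.
def low_pass(path, cut_hz, fps=24.0):
    dt = 1.0 / fps
    rc = 1.0 / (2.0 * math.pi * cut_hz)   # standard RC one-pole cutoff
    alpha = dt / (rc + dt)
    out, y = [], path[0]
    for x in path:
        y += alpha * (x - y)
        out.append(y)
    return out

# Slow ramp (wanted motion) plus fast alternating jitter (unwanted).
path = [0.1 * i + (2.0 if i % 2 else -2.0) for i in range(48)]
smooth = low_pass(path, cut_hz=0.5)

# The smoothed path follows the ramp; frame-to-frame wiggle shrinks.
raw_jitter = max(abs(path[i + 1] - path[i]) for i in range(47))
out_jitter = max(abs(smooth[i + 1] - smooth[i]) for i in range(47))
```

Raising `cut_hz` lets more of the original motion through (the rigid car-mount case); lowering it toward 0.5 Hz smooths aggressively (the hand-held case), at the cost of needing more zoom headroom.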
Fortunately, we have the benefit of foresight, so we can correct a problem in the middle of a shot, anticipating it long before it occurs, and provide an apparently stable result.

Animating the POI

The basic technique is to animate the position of the point-of-interest within the frame. If the shot bumps left suddenly, there are fewer pixels available on the left side of the point of interest to maintain its relative position in the output image, and a higher zoom will be required. If we have already moved the point of interest to the left, fewer pixels are required, and less zoom is required.

Earlier, in the Stabilization Quick Start, we remarked that the 28% zoom factor obtained by animating the rotation could be reduced further. We'll continue that example here to show how.

Re-do the quick start to completion, go to frame 178, with the Adjust tab open, in Padded display mode, with the make key button turned on. From the display, you can see that the red output-area rectangle is almost at the edge of the image. Grab the purple point-of-interest crosshair, and drag the red rectangle up into the middle of the image. Now everything is a lot safer. If you switch to the Stabilize tab and hit Auto-scale, the red rectangle enlarges—there is less zoom, as the Adjust tab shows. Only 15% zoom is now required. By dragging the POI/red rectangle, we reduced zoom.

You can see that what we did amounted to moving the POI. Hit Undo twice, and switch to the Final view. Drag the POI down to the left, until the Delta U/V values are approximately 0.045 and -0.035. Switch back to the Padded view, and you'll see you've done the same thing as before. The advantage of the padded view is that you can more easily see what you are doing, though you can get a similar effect in the Final view by increasing the margin to about 0.25, where you can see the dashed outline of the source image.
If you close the Image Prep dialog and play the shot, you will see the effect of moving the POI: a very stable shot, though the apparent subject changes over time. It can make for a more interesting shot and more creative decisions.

Too Much of a Good Thing?

To be most useful, you can scrub through your shot and look for the worst frame, where the output rectangle has the most missing, and adjust the POI position on that frame. After you do that, there will be some other frame which is now the worst frame. You can go and adjust that too, if you want. As you do this, the zoom required will get less and less.

There is a downside: as you do this, you are creating more of the shakiness you are trying to get rid of. If you keep going, you could get back to no zoom required, but all the original shakiness, which is of course senseless. Usually, you will only want to create two or three keys at most, unless the shot is very long. But exactly where you stop is a creative decision based on the allowable shakiness and quality impact.

Auto-Scale Capabilities

The Auto-scale button can automate the adjustment process for you, as controlled by the Animate listbox and Maximum auto-zoom settings. With Animate set to Neither, Auto-scale will pick the smallest zoom required to avoid missing pieces on the output image sequence, up to the specified maximum value. If that maximum is reached, there will be missing sections.

If you change the Animate setting to Translate, though, Auto-scale will automatically add delta U/V keys, animating the POI position, any time the zoom would have to exceed the maximum. Rewind to the beginning of the shot, and control-right-click the Delta-U spinner, clearing all the position keys. Change the Animate setting to Translate, reduce the Maximum auto-zoom to 1.1, then click Auto-Scale. SynthEyes adds several keys to achieve the maximum 10% zoom.
If you play back the sequence, you will see the shot shifting around a bit: 10% is probably too low, given the amount of jitter in the shot to begin with.

The Auto-scale button can also animate the zoom track, if enabled with the Animate setting. The result is equivalent to a zooming camera lens, and you must be sure to note that in the main lens panel setting if you will 3-D solve the shot later. This is probably only useful when there is a lot of resolution available to begin with, and the point of interest approaches the boundary of the image at the end of the shot.

Keep in mind that the Auto-scale functionality is relatively simple. By considering the purpose of the shot as well as the nature of any problems in it, you should often be able to do better.

Tweaking the Point of Interest

This is different than moving it! When the selected trackers are combined to form the single overall stabilization track, SynthEyes examines the weight of each tracker, as controlled from the main Tracker panel. This allows you to shift the position of the point of interest (POI) within a group of trackers, which can be handy.

Suppose you want to stabilize at the location of a single tracker, but you want to stabilize the rotation as well. With a single tracker, rotation cannot be stabilized. If you select two trackers, you can stabilize the rotation, but without further action, the point of interest will sit midway between the two trackers, not at the location of the one you care about. To fix this, select the desired POI tracker in the main viewport and increase its weight to the maximum (currently 10). Then select the other tracker(s) and reduce their weight to the minimum (0.050). This will put the POI very close to your main tracker. If you play with the weights a bit, you can make the POI go anywhere within the polygon formed by the trackers.
But do not be surprised if the resulting POI seems to slide on the image: the POI is really a 3-D location, and usually the combination of the trackers will not be on the surface (unless they are all in the same plane). If this is a problem for what you want to do, you should create a supervised tracker at the desired POI location and use that instead. If you have adjusted the weights and later want to re-solve the scene, set the weights back to 1.0 before solving (select them all, then set the weight to 1).

Resampling and Film to HDTV Pan/Scan Workflow

If you are working with filmed footage, you will often need to pull the actual usable area from the footage: the scan is probably roughly 4:3, but the desired final output is 16:9, 1.85, or even 2.35, so only part of the filmed image will be used. A director may select the desired portion to achieve a particular framing for the shot. Part of the image may be vignetted and unusable. The image must be cropped to pull out the usable portion with the correct aspect ratio.

This cropping operation can be performed as the film is scanned, so that only the desired framing is scanned; clearly this minimizes scan time and disk storage. But there is an important reason to scan the entire frame instead: the optic center must remain at the center of the image. If the scanning is done without paying attention to this, the center may be off, and it almost certainly will be if the framing is driven by directorial considerations. If the entire frame is scanned, or at least most of it, then you can use SynthEyes's stabilization software to perform keystone correction and produce properly centered footage. As a secondary benefit, you can do pan and scan operations to stabilize the shots, or achieve moving framings that would be difficult to do during scanning. With the more complete scan, the final decision can be deferred or changed later in production.
The Output tab of the Image Preparation dialog controls resampling, allowing you to output a different image format than the one coming in. The incoming resolution should be at least as large as the output resolution, for example a 3K 4:3 film scan for a 16:9 HDTV image at 1920x1080p. This allows enough latitude to pull out smaller sub-images.

If you are resampling from a larger resolution to a smaller one, you should use the Blur setting to minimize aliasing effects (moiré bands). Consider how much of the source image you are actually using before blurring: if you have a zoom factor of 2 into a 3K shot, the effective pixel count being used is only 1.5K, so you probably would not blur if you are producing 1920x1080p HD.

Due to the nature of SynthEyes's integrated image preparation system, the resampling, keystone correction, and lens un-distortion all occur simultaneously in a single pass. This is a vastly better situation than a typical node-based compositor, where the image is resampled, and degraded, at each stage.

Changing Shots, and Creating Motion in Stills

You can use the stabilization system to adjust the framing of shots in post-production, or to create motion from still images (the Ken Burns effect). To use the stabilizing engine you have to be stabilizing, so simply animating the Delta controls will not let you pan and scan without the following trick: delete all the trackers, click the Get Tracks button, and then turn on the Translation channel of the stabilizer. This turns on the stabilizer, making the Delta channels work, without doing any actual stabilization.

You must enter a reasonable estimate of the lens field of view. If it is a moving-camera or tripod-mode shot, you can track it first to determine the field of view; remember to delete the trackers before beginning the mock stabilization. If you are working from a still, you can use the single-frame alignment tool to determine the field of view.
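An IFL file is just a plain-text list of image filenames, one per line; repeating a single still N times turns it into an N-frame clip. If you prefer a script to a text editor, a minimal sketch follows (the filenames here are hypothetical):

```python
def write_ifl(ifl_path, image_name, frame_count):
    # An IFL file is a plain-text list, one image filename per line.
    # Repeating one still N times yields an N-frame "clip".
    with open(ifl_path, "w") as f:
        for _ in range(frame_count):
            f.write(image_name + "\n")

# Hypothetical names: a 120-frame clip from a single TIFF still.
write_ifl("still_shot.ifl", "cityscape.tif", 120)
```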
You will need to use a text editor to create an IFL file that contains the desired number of copies of your original file name.

Stabilization and Interlacing

Interlaced footage presents special problems for stabilization, because jitter in the positioning between the two fields is equivalent to jitter in camera position, which is exactly what we are trying to remove. Because the two fields are taken at different points in time (half a frame-time apart, regardless of shutter time), it is impossible for man or machine to determine exactly what happened, in general. Stabilizing interlaced footage also sacrifices a factor of two in vertical resolution.

Best approach: if at all possible, shoot progressive instead of interlaced footage. This is a good rule whenever you expect to add effects to a shot.

Fallback approach: stabilize slow-moving interlaced shots as if they were progressive, and stabilize rapidly-moving interlaced shots as interlaced.

To stabilize interlaced shots, SynthEyes stabilizes each sequence of fields independently. Note that within the image preparation subsystem, some animated tracks are animated by the field, and some by the frame:

Frame: levels, color/hue, distortion/scale, ROI
Field: FOV, cut frequency, Delta U/V, Delta Rot, Delta Zoom

When you are animating a frame-animated item on an interlaced shot, setting a key on one field (say 10) also sets the same key on the other field (say 11). This simplifies the situation, at least for these items, if you change a shot from interlaced to progressive ("yes" mode) or back.

Avoid Slowdowns Due to Missing Keyframes

While you are working on stabilizing a shot, you will be re-fetching frames from the source imagery fairly often, especially when you scrub through a shot to check the stabilization. If the source imagery is a QuickTime or AVI that does not have many (or any!)
keyframes, random access into the shot will be slow, since the codec must decompress every frame from the last keyframe up to the one needed. This can mean repeatedly decompressing the entire shot. It is not a SynthEyes problem, or even specific to stabilizing, but a consequence of the choice of codec settings. If this happens (and it is not uncommon), save the movie as an image sequence (with no stabilization) and use Shot/Change Shot Images to switch to that version instead. Alternatively, you may be able to assess the situation using the Padded display, turning the update mode to Neither, then scrubbing through the shot.

After Stabilizing

Once you have finished stabilizing the shot, you should write it back out to disk using the Save Sequence button on the Output tab. It is also possible to save the sequence through the Perspective window's Preview Movie capability. Each method has its advantages, but the Save Sequence button is generally better for this purpose: it is faster, does less to the images, lets you write a 16-bit version, and lets you write the alpha channel. However, it does not overlay inserted test objects the way Preview Movie does.

You can use the stabilized footage you write in downstream applications such as 3ds Max and Maya. But before you export the camera path and trackers from SynthEyes, you have a little more work to do: the tracker and camera paths in SynthEyes correspond to the original footage, not the stabilized footage, and they are substantially different. Once you close the Image Preparation dialog, you will see that the trackers are doing one thing, and the now-stable image another.

You should always save the stabilizing SynthEyes scene file at this point, for future use in the event of changes. You can then do a File/New, open the stabilized footage, track it, and export the 3-D scene matching the stabilized footage.
But… if you have already done a full 3-D track on the original footage, you can save time. Click the Apply to Trkers button on the Output tab; this applies the stabilization data to the existing trackers. When you close the Image Prep dialog, the 2-D tracker locations will line up correctly, though the 3-D X's will not yet. Go to the solver panel and re-solve the shot (Go!), and the 3-D positions and camera path will line up correctly again. (If you really wanted to, you could probably use Seed Points mode to speed up this re-solve.)

Important: if you later decide you want to change the stabilization parameters without re-tracking, you must not have cleared the stabilizer. Hit the Remove f/Trkers button BEFORE making any changes, to get back to the original tracking data. Otherwise, if you Apply twice, or Remove after changes, you will just create a mess. Also, the blip data is not changed by the Apply or Remove buttons, and it is not possible to Peel any blip trails, which correspond to the original image coordinates, after completing stabilization and hitting Apply. So you must either do all peeling first; remove, peel, and re-apply the stabilization; or re-track later if necessary.

Flexible Workflows

Suppose you have written out a stabilized shot and adjusted the tracker positions to match the new shot. You can solve the shot, export it, and play around with it in general. If you need to, you can pop the stabilization back off the trackers, adjust the stabilization, fix the trackers back up, and re-solve, all without going back to earlier scene files and thus losing later work. That's the kind of flexibility we like.

There's only one slight drawback: each time you save and close the file, then reopen it, you have to wait while the image prep system recomputes the stabilized image. That might be only a few seconds, or it might be quite a while for a long film shot.
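Conceptually, Apply to Trkers and Remove f/Trkers act as a per-frame transform on the 2-D tracker paths and its exact inverse, which is why they must be used in matched pairs. The translation-only sketch below is an illustration of that pairing, not SynthEyes' implementation (real stabilization also includes rotation and zoom):

```python
def apply_stab(path, deltas):
    # path: per-frame (u, v) tracker positions in the ORIGINAL footage;
    # deltas: per-frame (du, dv) stabilization offsets.
    return [(u + du, v + dv) for (u, v), (du, dv) in zip(path, deltas)]

def remove_stab(path, deltas):
    # Exact inverse of apply_stab: recovers the original coordinates,
    # which is why you must Remove before changing the stabilization.
    return [(u - du, v - dv) for (u, v), (du, dv) in zip(path, deltas)]

orig = [(0.40, 0.50), (0.43, 0.48)]
deltas = [(0.01, -0.02), (-0.02, 0.01)]
round_trip = remove_stab(apply_stab(orig, deltas), deltas)
print(all(abs(a - b) < 1e-12
          for p, q in zip(round_trip, orig)
          for a, b in zip(p, q)))  # True
```

Applying twice, or removing after the deltas have changed, shifts every tracker by a mismatched amount, which is exactly the "mess" the manual warns about.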
It's pretty stupid, when you consider that you have already written the complete stabilized shot to disk!

Approach 1: do a Shot/Change Shot Images to the saved stabilized shot, and reset the image prep system from the Preset Manager. This lets you work quickly from the saved version, but be sure to save this scene file separately, in case you need to change the stabilization later for some reason. And of course, going back to that saved file would mean losing later work.

Approach 2: create an image prep preset ("stab") containing the full stabilizer settings. Create another image prep preset ("quick") and reset it. Do the Shot/Change Shot Images. Now you have it both ways: fast loading, and if you need to go back and change the stabilization, you can switch back to the first ("stab") preset, remove the stabilization from the trackers, change the shot imagery back to the original footage, then make your stabilization changes. You will then need to re-write the new stabilized footage, re-apply it to the trackers, and so on.

Approach 1 is clearly simpler and should suffice for most simple situations. But if you need the flexibility, Approach 2 will give it to you.
https://w.atwiki.jp/m1000/pages/212.html
<< Memory management

ADJUSTALLOC: adjusts an allocated memory cell

Syntax
  an = ADJUSTALLOC(a, off, f)

Parameters
  a: address of the memory cell to operate on
  off: size to change (offset value, in bytes)
  f: positive value to grow, negative value to shrink (bytes)

Return value
  On success, the new address of the cell.
  0 if the cell could not be grown (out of memory).

Details
  Grows or shrinks the memory cell at the specified address. By the size given with off, the cell grows when f is positive and shrinks when f is negative.

OPL BBS
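The grow/shrink behavior can be modeled with a Python bytearray. This is only an analogy (real OPL adjusts a heap cell, may relocate it, and returns 0 on allocation failure); here the second argument is treated as the offset at which bytes are inserted or removed, and the third as a signed byte count:

```python
def adjust_alloc(cell, off, amount):
    # Grow (amount > 0): open a gap of `amount` zero bytes at offset
    # `off`.  Shrink (amount < 0): delete -amount bytes starting there.
    # Analogy for OPL's ADJUSTALLOC, using a bytearray as the "cell".
    if amount >= 0:
        cell[off:off] = bytes(amount)      # insert zero-filled gap
    else:
        del cell[off:off - amount]         # remove -amount bytes
    return cell

cell = bytearray(b"ABCDEF")
adjust_alloc(cell, 2, 3)     # grow: 3-byte gap at offset 2
print(cell)                  # bytearray(b'AB\x00\x00\x00CDEF')
adjust_alloc(cell, 2, -3)    # shrink: remove the gap again
print(cell)                  # bytearray(b'ABCDEF')
```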
https://w.atwiki.jp/xboxonescore/pages/937.html
Quantic Pinball
Achievements: 12 / Total gamerscore: 1000 / Difficulty: ☆☆☆☆☆ / Release date: February 9, 2018 / Price: $4.99 USD

Alpha: Complete Alpha table (80)
Back to Square One: Finish all tables until your starting level (140)
Beta: Complete Beta table (80)
Delta: Complete Delta table (80)
Easier: Trigger multiball in Special level (40)
Epsilon: Complete Epsilon table (80)
Gamma: Complete Gamma table (80)
Master Player: Make a 100.000.000 score (60)
Quantic tables: Unlock all tables (100)
Tsunami: Play Special level until wave 30 (140)
Zeta: Complete Zeta table (80)
Kill Them All: Kill all Invaders in Kill Them All mode (40)
https://w.atwiki.jp/nicoratch/pages/821.html
Overview: A controller capable of operating four decks. The unit has a sturdy metal body and ships with Virtual DJ LE7 (4-deck version) plus silk-screened overlay templates optimized for TRAKTOR PRO and TRAKTOR 2.

Quark: specifications

Mixer section
- Each channel can be assigned to either of two MIDI decks (A/C, B/D)
- High-quality ALPHA crossfader
- Long-life ALPHA faders
- Three-band EQ with kill
- CUE buttons for monitoring and CUE mix
- LED VU meter; LED for modulation level

Player section
- 2 large jog wheels with adjustable touch-sensitive system
- 4 jog-wheel modes: vinyl, CDJ, search, and loop adjust
- Large PLAY, CUE, and SYNC buttons
- High-precision pitch faders
- Two pitch-bend slides
- Multi-mode CUE / Auto Loop / Loop Roll (Virtual DJ) and CUE / MOVE / GRID (Traktor)
- Control over effects and samples with 4 rotary encoders (push/dial) and 5 buttons

Master section
- 1 master level knob
- Browser button for easy library search (with two load buttons)

Connections
- USB (slave)
- 5 V power supply input

PC system requirements
- Windows XP (latest SP, 32-bit) or Windows 7 (latest SP, 32/64-bit)
- Pentium IV / Intel Core 2, 1.8 GHz
- 2 GB RAM
- CD-ROM/DVD drive

Mac system requirements
- OS X 10.5 or higher
- Intel Core Duo family (Intel Macs only), 1.66 GHz
- 2 GB RAM
- CD-ROM/DVD drive

General
- Power supply: USB 5 V, 500 mA / DC 6 V, 1.75 A
- Weight: 3.10 kg
- Dimensions: 358 (W) x 229 (D) x 64 (H) mm

Includes
- 1 main unit (Quark)
- 1 external power supply
- 1 USB cable
- 1 Akiyama utilities CD
- Traktor and Virtual DJ quick guide
- Traktor template

Price
Quark: http://www.akiyamadj.com/product_info1.php?products_model=QUARK

Quark SC: specifications

Mixer section
- Each channel can be assigned to either of two MIDI decks (A/C, B/D)
- High-quality ALPHA crossfader
- Long-life ALPHA faders
- Three-band EQ with kill
- CUE buttons for monitoring and CUE mix
- LED VU meter; LED for modulation level

Player section
- 2 large jog wheels with adjustable touch-sensitive system
- 4 jog-wheel modes: vinyl, CDJ, search, and loop adjust
- Large PLAY, CUE, and SYNC buttons
- High-precision pitch faders
- Two pitch-bend slides
- Multi-mode CUE / Auto Loop / Loop Roll (Virtual DJ) and CUE / MOVE / GRID (Traktor)
- Control over effects and samples with 4 rotary encoders (push/dial) and 5 buttons

Master section
- 1 master level knob
- 1 headphones level knob
- 1 mixer knob (CHA/CHB) for headphones
- 1 microphone level knob
- Browser button for easy library search (with two load buttons)

Connections
- Two phono/line analog stereo inputs
- 6.3 mm microphone jack, with software routing to apply effects
- 6.3 mm headphone jack
- Two master outputs (RCA)
- USB (slave)
- 5 V power supply input

PC system requirements
- Windows XP (latest SP, 32-bit) or Windows 7 (latest SP, 32/64-bit)
- Pentium IV / Intel Core 2, 1.8 GHz
- 2 GB RAM
- CD-ROM/DVD drive

Mac system requirements
- OS X 10.5 or higher
- Intel Core Duo family (Intel Macs only), 1.66 GHz
- 2 GB RAM
- CD-ROM/DVD drive

Audio specifications
(Load: line = 100 kohm, headphones = 32 ohm; potentiometers at maximum; test signal: MP3, 128 kbps; headphones balance potentiometers at the limit, CHA or CHB)

Output level
- Line OUT 1/2: typical 0.8 V +/-0.5 dB, limit 0.8 V +/-1 dB, condition 1 kHz, 0 dB (TCD-782-TRK2)
- Headphones: typical 0.3 V +/-0.5 dB, limit 0.3 V +/-1 dB, condition 1 kHz, -20 dB (TCD-782-TRK16)

Channel balance
- Line OUT 1/2: typical within 0.5 dB, limit within 1 dB, condition 1 kHz, 0 dB (TCD-782-TRK2)

Channel separation L/R (*2)
- Line OUT 1/2: typical 85 dB, limit 80 dB, condition 1 kHz, 0 dB (TCD-782-TRK9/11)

THD+N (*1)
- Line OUT 1/2: typical 0.02%, limit 0.05%, condition 1 kHz, 0 dB (TCD-782-TRK2)
- Headphones: typical 0.03%, limit 0.06%, condition 1 kHz, 0 dB (1 V output)

S/N (*2)
- Line OUT 1/2: typical 90 dB, limit 85 dB, condition 1 kHz, 0 dB (TCD-782-TRK2/8)

Frequency response
- Line OUT 1/2: typical 17 Hz-16 kHz +/-0.5 dB, limit 17 Hz-16 kHz +/-1 dB, condition (TCD-781-TRK1,4/16)

Max. headphones output
- Typical 1.4 V, limit 1.3 V, condition 1 kHz, 0 dB, THD = 1%

Mute
- Line OUT 1/2: typical -55 dB, limit -50 dB, condition 1 kHz, 0 dB (TCD-782-TRK2)

Rec/Play section (master at maximum)

a) Input level, Line OUT 1/2
- Typical 0.8 V +/-1 dB, limit 0.8 V +/-1.5 dB, condition Line IN 1 kHz, +6 dBV (2 V)
- Typical 0.8 V +/-1 dB, limit 0.8 V +/-2 dB, condition Phono IN 1 kHz, -32 dBV
- Typical 0.8 V +/-1 dB, limit 0.8 V +/-2 dB, condition MIC 1 kHz, -36 dB (max. level)

b) Frequency response, Line OUT 1/2
- Typical 0.8 V +/-1 dB, limit 20 Hz-20 kHz +0/-3 dB, condition Line IN 1 kHz, +6 dBV (2 V)
- Typical 0.8 V +/-1 dB, limit 20 Hz-20 kHz +2/-3 dB, condition Phono IN 1 kHz, -50 dB (max. level)
- Typical 0.8 V +/-1 dB, limit 20 Hz-20 kHz +/-3 dB, condition MIC 1 kHz, -50 dB (max. level)

c) S/N (*2), Line OUT 1/2
- Typical 80 dB, limit 76 dB, condition Line IN 1 kHz, +6 dBV (2 V)
- Typical 75 dB, limit 70 dB, condition Phono IN 1 kHz, -32 dBV
- Typical 65 dB, limit 60 dB, condition MIC 1 kHz, -50 dB (max. level)

Notes
1. Low-pass filter at 20 kHz
2. Low-pass filter at 20 kHz, "IHF-A" weighted
3. All measurements taken with the external power supply connected

General
- Power supply: USB 5 V, 500 mA / DC 6 V, 1.75 A
- Consumption: 15 W
- Operating temperature: +5°C to +35°C
- Weight: 3.30 kg
- Dimensions: 358 (W) x 229 (D) x 64 (H) mm
- Dimensions with connectors and protections: 358 (W) x 233.5 (D) x 64 (H) mm

Includes
- 1 main unit (Quark SC)
- 1 external power supply
- 1 USB cable
- 1 Akiyama utilities CD
- Traktor and Virtual DJ quick guide
- Traktor template

Price: EUR 159.00
Quark SC: http://www.akiyamadj.com/akiyama+quark+sc+2+channels+midi+controller.-p-QUARK-SC.html?cPath=115_117